Early Learning


Early-Learning Regularization Prevents Memorization of Noisy Labels

Neural Information Processing Systems

We propose a novel framework to perform classification via deep learning in the presence of noisy annotations. When trained on noisy labels, deep neural networks have been observed to first fit the training data with clean labels during an "early learning" phase, before eventually memorizing the examples with false labels. We prove that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, even in simple linear models, and give a theoretical explanation in this setting. Motivated by these findings, we develop a new technique for noisy classification tasks, which exploits the progress of the early learning phase. In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization. There are two key elements to our approach.
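The regularizer described here augments the usual cross-entropy loss with a term built from a running average of the model's own predictions, so that the network is anchored to its (cleaner) early-phase targets. Below is a minimal PyTorch sketch in the spirit of the paper, not the authors' released code; the hyperparameters `beta` and `lam` are illustrative values, not the paper's tuned ones.

```python
import torch
import torch.nn.functional as F

class EarlyLearningRegularizedLoss(torch.nn.Module):
    """Sketch of an early-learning regularizer: cross-entropy plus a term
    that pulls predictions toward a temporally ensembled average of past
    predictions, which reflects the early-learning phase before
    memorization of false labels sets in."""

    def __init__(self, num_examples, num_classes, beta=0.7, lam=3.0):
        super().__init__()
        # One running target distribution per training example.
        self.register_buffer("targets", torch.zeros(num_examples, num_classes))
        self.beta, self.lam = beta, lam

    def forward(self, logits, labels, idx):
        probs = F.softmax(logits, dim=1).clamp(1e-4, 1.0 - 1e-4)
        # Temporal ensembling: update running targets without tracking gradients.
        with torch.no_grad():
            self.targets[idx] = (
                self.beta * self.targets[idx] + (1 - self.beta) * probs
            )
        ce = F.cross_entropy(logits, labels)
        # Penalize drifting away from the early-learning targets: the log term
        # decreases as the inner product <target, prediction> approaches 1.
        inner = (self.targets[idx] * probs).sum(dim=1)
        reg = torch.log((1.0 - inner).clamp(min=1e-4)).mean()
        return ce + self.lam * reg
```

The regularization term only back-propagates through the current predictions; the running targets are treated as constants, which is what keeps the gradient pointed toward the early-phase solution rather than toward the noisy labels.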


Early learning of the optimal constant solution in neural networks and humans

Rubruck, Jirko, Bauer, Jan P., Saxe, Andrew, Summerfield, Christopher

arXiv.org Artificial Intelligence

Deep neural networks learn increasingly complex functions over the course of training. Here, we show both empirically and theoretically that learning of the target function is preceded by an early phase in which networks learn the optimal constant solution (OCS): initial model responses mirror the distribution of target labels while entirely ignoring information provided in the input. Using a hierarchical category learning task, we derive exact solutions for learning dynamics in deep linear networks trained with bias terms. Even when initialized to zero, this simple architectural feature induces substantial changes in early dynamics. We identify hallmarks of this early OCS phase and illustrate how these signatures appear in deep linear networks and in larger, more complex (and nonlinear) convolutional neural networks solving a hierarchical learning task based on MNIST and CIFAR-10. We explain these observations by proving that deep linear networks necessarily learn the OCS during early learning. To further probe the generality of our results, we train human learners over the course of three days on the category learning task. We then identify qualitative signatures of this early OCS phase in the dynamics of true-negative (correct-rejection) rates. Surprisingly, we find the same early reliance on the OCS in the behaviour of human learners. Finally, we show that learning of the OCS can emerge even in the absence of bias terms and is equivalently driven by generic correlations in the input data. Overall, our work suggests the OCS as a universal learning principle in supervised, error-corrective learning, and identifies the mechanistic reasons for its prevalence.
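The OCS itself is easy to compute: it is simply the empirical marginal distribution of the labels, predicted identically for every input. A minimal sketch of the kind of check this suggests, assuming a hypothetical classifier `model` and tensors `inputs` and `labels`, is:

```python
import torch
import torch.nn.functional as F

def ocs_distance(model, inputs, labels, num_classes):
    """Mean distance between model outputs and the optimal constant
    solution (OCS): the empirical marginal distribution of the labels,
    which ignores the input entirely."""
    ocs = torch.bincount(labels, minlength=num_classes).float()
    ocs = ocs / ocs.sum()                    # marginal label distribution
    with torch.no_grad():
        probs = F.softmax(model(inputs), dim=1)
    return (probs - ocs).norm(dim=1).mean()  # small during the early OCS phase
```

If the paper's account holds, this distance should shrink early in training, before the network begins producing input-dependent responses and moves away from the constant solution.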


Artificial intelligence in early learning: weird or warranted?

CMU School of Computer Science

Dr. David Touretzky has set out to change that. He's the founder and chair of the AI4K12 initiative, aimed at developing national guidelines for A.I. education and facilitating its instruction to students in kindergarten through 12th grade. "I looked at the national guidelines and there were just two sentences about A.I. and they were for 11th and 12th graders. I realized this was a problem," he said. Touretzky is also a Research Professor at Carnegie Mellon University and lead author of the five core concepts of A.I. education.


A Back-Propagation Algorithm with Optimal Use of Hidden Units

Chauvin, Yves

Neural Information Processing Systems

The algorithm can automatically find optimal or nearly optimal architectures necessary to solve known Boolean functions, facilitate the interpretation of the activation of the remaining hidden units and automatically estimate the complexity of architectures appropriate for phonetic labeling problems. The general principle of the algorithm can also be adapted to different tasks: for example, it can be used to eliminate the [0, 0] local minimum of the [-1.
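The core idea is to add an "energy" penalty on hidden-unit activations so that units contributing little to the solution decay toward inactivity and can be pruned, yielding the minimal architectures described above. A minimal PyTorch sketch of this kind of penalty is below; it is a modern stand-in for the original formulation, with an illustrative penalty weight `mu`.

```python
import torch
import torch.nn.functional as F

class PenalizedMLP(torch.nn.Module):
    """One-hidden-layer network trained with an activation-energy penalty,
    in the spirit of Chauvin's algorithm: hidden units that contribute
    little are driven toward zero activation and can be pruned."""

    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.hidden = torch.nn.Linear(n_in, n_hidden)
        self.out = torch.nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.tanh(self.hidden(x))
        return self.out(h), h

def loss_fn(logits, h, targets, mu=0.01):
    # Saturating energy term h^2 / (1 + h^2): near-zero units are cheap,
    # while strongly active units pay a roughly constant cost, so the
    # penalty prunes unused units without flattening the active ones.
    energy = (h.pow(2) / (1.0 + h.pow(2))).sum(dim=1).mean()
    return F.cross_entropy(logits, targets) + mu * energy
```

After training, hidden units whose activations stay near zero across the dataset can be removed, which is how a nearly optimal architecture emerges from an over-provisioned one.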

